
    Graphene as a Novel Single Photon Counting Optical and IR Photodetector

    Bilayer graphene has many unique optoelectronic properties, including a tuneable band gap, that make it possible to develop new and more efficient optical and nanoelectronic devices. We have developed a Monte Carlo simulation of a single-photon-counting photodetector incorporating bilayer graphene. Our results show that it would be conceptually feasible to manufacture a single-photon-counting photodetector (with colour sensitivity) from bilayer graphene for use across both optical and infrared wavelengths. Our concept exploits the high carrier mobility and tuneable band gap of bilayer graphene, which allow low-noise operation over a range of cryogenic temperatures, reducing cryogen costs with a trade-off between resolution and operating temperature. The results of this theoretical study now enable us to progress to the manufacture of prototype photon counters at optical and IR wavelengths that may have the potential to be groundbreaking in some scientific research applications.
    Comment: Conference Proceeding in Graphene-Based Technologies, 201
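    The detection principle described above (photons absorbed only when their energy exceeds the gate-tuneable gap, with thermally activated dark counts suppressed by cooling) can be illustrated with a toy Monte Carlo. All numbers here are illustrative placeholders, not the paper's model parameters:

```python
import math
import random

def simulate_counts(n_photons, photon_energy_ev, band_gap_ev,
                    temperature_k, quantum_efficiency=0.8, seed=0):
    """Toy Monte Carlo of a single-photon counter: a photon is counted
    only when its energy exceeds the (gate-tuneable) band gap, and each
    absorbed photon is detected with some quantum efficiency.  Dark
    counts are taken to scale as exp(-Eg / 2kT).  All parameter values
    are illustrative, not fitted detector physics."""
    rng = random.Random(seed)
    k_b = 8.617e-5  # Boltzmann constant in eV/K
    detected = 0
    if photon_energy_ev > band_gap_ev:
        for _ in range(n_photons):
            if rng.random() < quantum_efficiency:
                detected += 1
    # thermal dark-count expectation per readout (arbitrary prefactor)
    dark_rate = 1e6 * math.exp(-band_gap_ev / (2 * k_b * temperature_k))
    return detected, dark_rate

# Cooling suppresses the dark-count rate exponentially, which is the
# trade-off between operating temperature and resolution noted above.
_, dark_77k = simulate_counts(1000, 0.8, 0.25, 77.0)
_, dark_4k = simulate_counts(1000, 0.8, 0.25, 4.2)
```

    Tuning the gap above the photon energy (the "colour sensitivity" mechanism) yields zero photo-counts in this sketch, while lowering the temperature shrinks the dark rate.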

    Chapter 39: Transient Event Notification with VOEvent

    Events and transients, such as gamma-ray bursts, supernovae, and microlensing, are becoming increasingly important in modern astronomy. We present the VOEvent infrastructure for communicating observations of immediate astronomical events, with the intention of stimulating rapid and automated follow-up from robotic telescopes. The information packet itself is described, as well as the emerging network that allows authoring, publication, subscription, and global identifiers. VOEvent is a general, standard, flexible, peer-to-peer, robust, secure, scalable solution for this infrastructure: a vision of multiple federated event streams shared by peers and evaluated by decision support.
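    A VOEvent packet is an XML document carrying who observed what, where, and when. The sketch below builds a heavily simplified packet; the element names only loosely follow the VOEvent schema (a real packet needs the full namespace, a `role` attribute, and a schema-valid WhereWhen block), and the IVORN identifiers are hypothetical:

```python
import xml.etree.ElementTree as ET

def make_voevent(ivorn, ra_deg, dec_deg, utc_iso, author):
    """Build a minimal, simplified event packet (not schema-valid)."""
    root = ET.Element("VOEvent", attrib={"version": "2.0", "ivorn": ivorn})
    who = ET.SubElement(root, "Who")
    ET.SubElement(who, "AuthorIVORN").text = author
    wherewhen = ET.SubElement(root, "WhereWhen")
    ET.SubElement(wherewhen, "ISOTime").text = utc_iso
    pos = ET.SubElement(wherewhen, "Position2D", unit="deg")
    ET.SubElement(pos, "C1").text = str(ra_deg)  # right ascension
    ET.SubElement(pos, "C2").text = str(dec_deg)  # declination
    return ET.tostring(root, encoding="unicode")

# hypothetical identifiers for illustration only
packet = make_voevent("ivo://example/grb#2024-001", 83.63, 22.01,
                      "2024-01-01T00:00:00", "ivo://example/author")
```

    The global identifier (IVORN) is what lets subscribers cite, deduplicate, and follow up a specific event across federated streams.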

    A Test Suite for High-Performance Parallel Java

    The Java programming language has a number of features that make it attractive for writing high-quality, portable parallel programs. A pure object formulation, strong typing, and the exception model make programs easier to create, debug, and maintain. The language's elegant threading provides a simple route to parallelism on shared-memory machines. In anticipation of great improvements in numerical performance, this paper presents a suite of simple programs that indicate how a pure Java Navier-Stokes solver might perform. The suite includes a parallel Euler solver. We present results from a 32-processor Hewlett-Packard machine and a 4-processor Sun server. While speedup is excellent on both machines, indicating a high-quality thread scheduler, the single-processor performance needs much improvement.
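    Speedup results like those reported are conventionally summarised by two metrics: speedup S = T1/Tp and parallel efficiency E = S/p. A minimal sketch, with illustrative timings rather than the paper's measurements:

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Classic parallel-performance metrics:
    speedup S = T1 / Tp, efficiency E = S / p."""
    s = t_serial / t_parallel
    return s, s / n_procs

# Hypothetical timings for a 32-processor run: ideal scaling would
# give S = 32 and E = 1.0; values below are for illustration only.
s, e = speedup_and_efficiency(640.0, 25.0, 32)
```

    "Excellent speedup" in the abstract's sense means E stays close to 1 as p grows, which is what a good thread scheduler on a shared-memory machine makes possible.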

    Accounting for the effect of heterogeneous plastic deformation on the formability of aluminium and steel sheets

    Forming Limit Curves characterise 'mean' failure strains of sheet metals. Safety levels from the curves define the deterministic upper limit of the processing and part design window, which can be small for high-strength, low-formability materials. Effects of the heterogeneity of plastic deformation, widely accepted to occur on the microscale, are neglected. Marciniak tests were carried out on aluminium alloys (AA6111-T4, NG5754-O), dual-phase steel (DP600) and mild steel (MS3). Digital image correlation was used to measure the effect of heterogeneity on failure. Heterogeneity, based on strain variance, was modelled with a 2-component Gaussian Mixture Model, and a framework was proposed to (1) identify the onset of necking and (2) re-define formability as a probability of failure. The results were 'forming maps' in major-minor strain space of contours of constant probability (from P=0 to P=1), which showed how failure risk increased with major strain. The contour bands indicated the unique degree of heterogeneity in each material. NG5754-O had the greatest width (0.07 strain) in plane strain and MS3 the lowest (0.03 strain). This novel characterisation will allow engineers to balance a desired forming window for a component design with the risk of failure of the material.
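    The probabilistic re-definition of formability can be sketched as evaluating the cumulative probability of a 2-component Gaussian mixture over failure strains at a given major strain. The mixture parameters below are invented for illustration, not the paper's fitted values:

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def failure_probability(major_strain, weights, means, sigmas):
    """P(failure) at a given major strain, modelled as the CDF of a
    2-component Gaussian mixture over observed failure strains.
    Parameters are illustrative placeholders."""
    return sum(w * gaussian_cdf(major_strain, m, s)
               for w, m, s in zip(weights, means, sigmas))

# hypothetical fit: a dominant failure mode plus an early-failure mode
weights, means, sigmas = [0.7, 0.3], [0.30, 0.22], [0.02, 0.03]
p_low = failure_probability(0.15, weights, means, sigmas)   # well inside the window
p_high = failure_probability(0.35, weights, means, sigmas)  # beyond most failures
```

    Sweeping this probability over major-minor strain space is what produces the contour 'forming maps'; wider contour bands correspond to greater strain heterogeneity in the material.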

    Survey of the Moths (Lepidoptera) Inhabiting the Funk Bottoms Wildlife Area, Wayne and Ashland Counties, Ohio

    Author Institution: Department of Entomology, Ohio Agricultural Research and Development Center, The Ohio State University. In 1995, the Funk Bottoms Wildlife Area was the subject of an ongoing series of insect surveys intended to establish benchmark information on the arthropod diversity of wetlands in northeast Ohio. This article concentrates on the moths which were collected at ultraviolet light traps within the Funk Bottoms Wildlife Area. A companion report will follow, focusing on the Coleoptera along with several orders of aquatic insects. 3,252 specimens were identified to 306 species in 19 families. These species are classified as follows: Abundant = 34; Locally Abundant = 1; Common = 257; Locally Common = 2; Uncommon = 10; Rare = 1; and Special Interest = 1.

    Grist: Grid-based Data Mining for Astronomy

    The Grist project is developing a grid-technology-based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high-redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the "hyperatlas" project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.

    Norovirus whole genome sequencing by SureSelect target enrichment: a robust and sensitive method

    Norovirus full genome sequencing is challenging due to sequence heterogeneity between genomes. Previous methods have relied on PCR amplification, which is problematic due to primer design, and on RNA-Seq, which non-specifically sequences all RNA in a stool specimen, including host and bacterial RNA.

    Target enrichment uses a panel of custom-designed 120-mer RNA baits complementary to all publicly available norovirus sequences, with multiple baits targeting each position of the genome, thus overcoming the challenge of primer design. Norovirus genomes are enriched from stool RNA extracts to minimise sequencing of non-target RNA.

    SureSelect target enrichment and Illumina sequencing were used to sequence full genomes from 507 norovirus-positive stool samples with RT-qPCR Ct values of 10-43. Sequencing on an Illumina MiSeq in batches of 48 generated on average 81% on-target reads per sample and 100% genome coverage with >12,000-fold read depth. Samples included genotypes GI.1, GI.2, GI.3, GI.6, GI.7, GII.1, GII.2, GII.3, GII.4, GII.5, GII.6, GII.7, GII.13, GII.14 and GII.17. Once outliers are accounted for, we generate over 80% genome coverage for all positive samples, regardless of Ct value.

    164 samples were tested in parallel with conventional PCR genotyping of the capsid shell domain. 164/164 samples were successfully sequenced, compared to 158/164 that were amplified by PCR. Four of the samples that failed capsid PCR had low titres, suggesting target enrichment is more sensitive than gel-based PCR. Two samples failed PCR due to primer mismatches; target enrichment uses multiple baits targeting each position, thus accommodating sequence heterogeneity between norovirus genomes.
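    The bait design idea (120-mers tiled across the genome so every position is covered by multiple baits) can be sketched as a simple tiling routine. The 120-mer length comes from the abstract; the 60 nt step (roughly 2x tiling) and the mock sequence are illustrative assumptions, not the kit's actual design density:

```python
def tile_baits(genome, bait_len=120, step=60):
    """Generate overlapping bait sequences tiling a genome string.
    bait_len=120 per the abstract; step=60 is an illustrative choice."""
    baits = []
    pos = 0
    while pos + bait_len <= len(genome):
        baits.append(genome[pos:pos + bait_len])
        pos += step
    # add a final bait flush with the 3' end if the tiling left a gap
    if len(genome) >= bait_len and (len(genome) - bait_len) % step != 0:
        baits.append(genome[-bait_len:])
    return baits

genome = "ACGT" * 2000   # 8,000 nt mock sequence, roughly norovirus-sized
baits = tile_baits(genome)
```

    In the real panel this tiling is repeated over every publicly available norovirus sequence, which is why a mismatch under any one bait still leaves other baits able to capture the fragment.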

    Strategies for parallel and numerical scalability of CFD codes

    In this article we discuss a strategy for speeding up the solution of the Navier-Stokes equations on highly complex solution domains such as complete aircraft, spacecraft, or turbomachinery equipment. We have used a finite-volume code for the (non-turbulent) Navier-Stokes equations as a testbed for the implementation of linked numerical and parallel processing techniques. Speedup is achieved by the "Tangled Web" of advanced grid topology generation, adaptive coupling, and sophisticated parallel computing techniques. An optimized grid topology is used to generate an optimized grid: at the block level such a grid is unstructured, whereas within a block a structured mesh is constructed, thus retaining the geometrical flexibility of the finite element method while maintaining the numerical efficiency of the finite difference technique. To achieve a steady-state solution, we use grid sequencing: proceeding from coarse to finer grids, where the scheme is explicit in time. Adaptive coupling is derived from the observation that numerical schemes have differing efficiency during the solution process. Coupling strength between grid points is increased by using an implicit scheme at the sub-block level, then at the block level, and ultimately fully implicitly across the whole computational domain. Other techniques include switching numerical schemes and the physics model during the solution, and dynamic deactivation of blocks. Because the computational work per block is highly variable with adaptive coupling, especially for very complex flows, we have implemented parallel dynamic load balancing to transfer blocks between processors at run time. Several 2D and 3D examples illustrate the functioning of the Tangled Web approach on different parallel architectures.
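    The dynamic load-balancing idea (redistributing blocks when per-block work becomes uneven) can be sketched with a greedy longest-processing-time assignment: always hand the next-heaviest block to the least-loaded processor. This is a simplified stand-in for the paper's scheme, with invented block costs:

```python
import heapq

def balance_blocks(block_costs, n_procs):
    """Greedy LPT assignment of grid blocks to processors: sort blocks
    by cost (descending) and repeatedly give the heaviest remaining
    block to the currently least-loaded processor."""
    # min-heap of (load, processor_id, assigned block costs)
    heap = [(0.0, p, []) for p in range(n_procs)]
    heapq.heapify(heap)
    for cost in sorted(block_costs, reverse=True):
        load, p, blocks = heapq.heappop(heap)  # least-loaded processor
        blocks.append(cost)
        heapq.heappush(heap, (load + cost, p, blocks))
    return sorted(heap)  # ascending by final load

# hypothetical per-block costs after adaptive coupling skews the work
assignment = balance_blocks([9.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0], 3)
loads = [load for load, _, _ in assignment]
```

    In the actual code this decision is revisited during the run, since block costs change as the coupling strength and physics model evolve; the greedy step itself stays the same.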

    High-Throughput and Cost-Effective Characterization of Induced Pluripotent Stem Cells.

    Reprogramming somatic cells to induced pluripotent stem cells (iPSCs) offers the possibility of studying the molecular mechanisms underlying human diseases in cell types difficult to extract from living patients, such as neurons and cardiomyocytes. To date, studies have been published that use small panels of iPSC-derived cell lines to study monogenic diseases. However, to study complex diseases, where the genetic variation underlying the disorder is unknown, a sizable number of patient-specific iPSC lines and controls need to be generated. Currently, the methods for deriving and characterizing iPSCs are time-consuming, expensive, and, in some cases, descriptive but not quantitative. Here we set out to develop a set of simple methods that reduce cost and increase throughput in the characterization of iPSC lines. Specifically, we outline methods for high-throughput quantification of surface markers, gene expression analysis of in vitro differentiation potential, and evaluation of karyotype with markedly reduced cost.